Union pushes back on CNET’s AI journalism guidelines
CNET has altered its policy on the use of AI in its journalism after almost seven months of publishing machine-generated stories without revealing their origin to readers. The company’s in-house AI, known as Responsible AI Machine Partner (RAMP), will no longer be used to write articles. However, AI remains a presence in CNET’s newsroom.
The site states that RAMP will still be used in two broad ways. The first, titled “Organizing Large Amounts of Data,” offers an example that seems narrower than its umbrella title suggests: “RAMP helps us sort through things like price and availability data and present it in ways that tailor that information to specific audiences. Without the help of AI, this workload wouldn’t be possible.”
The second category (“speeding up certain search and administrative parts of our workflow”) is murkier. The guidelines state, “CNET editors may use artificial intelligence to automate some parts of our work so we can focus on the parts that bring the most unique value.” RAMP can also create content such as illustrative material (based on trusted sources) that humans then review and edit. You’d be forgiven for thinking this sounds a lot like what got CNET into trouble in the first place.
The venerable tech site published an innocuous explainer titled “What Are Credit Card Fees?” on November 11, 2022 under the byline “CNET Money Staff,” with no further disclosure of its origin, and went on to publish dozens of other personal finance stories under that byline through mid-January. At that point, Futurism uncovered two important details: the CNET Money Staff stories were generated by artificial intelligence, and much of the work was wildly inaccurate. CNET has issued corrections to more than half of those stories and apparently stopped using such tools in response to the criticism it drew.
Meanwhile, CNET’s staff has publicly announced its intention to unionize with the Writers Guild of America, East. The bargaining unit has also objected to the site’s plans to continue deploying AI.
Judging from the union’s response on Twitter, the guidelines fall far short of the protections CNET employees had hoped for. The unit says it looks forward to negotiating before the tool rolls out: how and what data is retrieved, a regular role in testing and reevaluating the tool, the right to opt out and remove bylines, and a voice to ensure editorial integrity.
New AI policy @CNET affects workers. Before the tool rolls out, our union looks forward to negotiating: how & what data is retrieved; a regular role in testing/reevaluating tool; right to opt out & remove bylines; a voice to ensure editorial integrity. https://t.co/7FQFWhRoui
— CNET Media Workers Union (@cnetunion) June 6, 2023
CNET is adamant that it isn’t currently using RAMP to write full stories, though it doesn’t deny that it did so in the past. The new guidelines also leave that possibility open, along with the possibility of using AI to create images or videos, promising only that “if the text comes from our AI tool, we will include that information in a disclosure.” CNET’s apparent optimism about AI stands out against the backdrop of news organizations broadly trying to guard against the technology’s potential ill effects. The New York Times and other media outlets began preliminary talks this week to discuss artificial intelligence’s role in disinformation and plagiarism, and how to ensure fair compensation when authorship becomes murky.
Previous CNET Money Staff articles have since been updated to reflect the new editorial guidelines. Each now credits the human staffer who rewrote the story, lists the supervising editor’s name, and carries the following editors’ note at the bottom: “An earlier version of this article was powered by an AI engine. This version has been significantly updated by a staff writer.”
This kind of basic disclosure is neither difficult nor unusual. Crediting a story’s source was a tenet of journalism long before artificial intelligence was advanced enough to warrant a byline, and the Associated Press has included such disclosures in its automated financial stories for the better part of a decade. In other words, much of CNET’s trouble could have been avoided had it simply alerted readers to the origin of these stories at the outset. The bigger concern, however, is that unlike the AP’s use of these tools, CNET seems willing to give RAMP more latitude to generate more kinds of content, and the new guidelines do little to meaningfully constrain those boundaries.